Patent abstract:

Publication number: SE1000142A1
Application number: SE1000142
Filing date: 2010-02-15
Publication date: 2011-08-16
Inventors: Alexander Lindskog; Gustaf Pettersson; Ulf Holmstedt; Johan Windmark; Sami Niemi
Applicant: Scalado AB
Primary IPC class:
Patent description:

According to a first aspect, a method of digital image manipulation is provided comprising: receiving a source image and a target image, identifying a source area in the source image, wherein the source area has a set of coordinates, using the set of coordinates for the source area to, in response to said identification of said source area, identify a target area in the target image, and creating a digital image based on image data information from the target image, whereby image data information from the target area is seamlessly replaced with image data information from the source area.
According to a second aspect, a method of digital image manipulation is provided comprising: receiving a source image and a target image, identifying a source area in the source image, wherein the source area has a set of coordinates, using the set of coordinates of the source area to, in response to said identification of said source area, identify a target area in the target image, and creating a digital image based on image data information from the source image, whereby image data information from the source area is seamlessly replaced with image data information from the target area.
Thus, a digital image can be created based on image data information from either the target image, in which case image data information from the target area is seamlessly replaced with image data information from the source area, or the source image, in which case image data information from the source area is seamlessly replaced with image data information from the target area.
The method thus enables areas in digital images to be replaced seamlessly. The procedures allow areas with defects to be seamlessly replaced with areas from other images, inter alia by allowing a user to replace parts of an image with the same area from different images. For example, an area of an image depicting a person with closed eyes can be replaced with the corresponding area from another (similar) image where the eyes of the same person are open.
Preferably, the images are similar in that they depict the same scene (including the same photographic elements) and are taken in succession with a fairly small temporal distance between the shots.
The method may further comprise receiving feature information regarding a feature in at least one of said source image and said target image, wherein at least one of said source image and said target image comprises said feature, and identifying said source area based on said feature.
The method may further comprise determining whether one of said source area and said target area meets a condition based on said feature information, and creating said digital image based on said determination. Thus, the created image can be determined to include an area or object comprising a predetermined feature.
The set of coordinates for the source area can be used so that a set of coordinates for the target area in the target image corresponds to the set of coordinates for the source area. Thus, the target area can be easily determined.
Identifying the source area may further comprise determining a compensating motion between at least a portion of said source image and at least a portion of said target image such that said compensating motion minimizes a difference between the source image and the target image.
Thus, the source image and the target image do not have to be perfectly (i.e. pixel by pixel) aligned.
The method may further comprise using a "thin plate spline" method to determine said compensating motion. Thus, fast and efficient motion compensation can be achieved.
The detail measure may be based on data unit lengths for data units ("DUs") in said candidate source area and said candidate target area.
Thus, a fast but at the same time precise detail measure can be used.
The method may further comprise soft mixing between the source image and the target image by using quad-tree optimization, thus providing fast and efficient soft mixing.
The method may further comprise replacing one of the target image and the source image with the created digital image. Thus, memory needs can be reduced.
The source image and the target image may have been encoded. The method may further comprise decoding, in the source image, only image data information representing the source area, and decoding, in the target image, only image data information representing the target area. Thus, fast image manipulation can be achieved.
The source area can correspond to a source object. The target area can correspond to a target object. The source object and target object can represent one and the same object.
Thus, object recognition can be used to improve the procedure.
The method may include receiving a signal identifying said source area. The signal may be associated with user input.
Thus, the source area can be identified by a user.
According to a third aspect, the present invention is realized by a digital image manipulation device comprising: means for receiving a source image and a target image, means for identifying a source area in the source image, wherein the source area has a set of coordinates, means for using the set of coordinates for the source area to, in response to said identification of said source area, identify a target area in the target image, and means for creating a digital image based on image data information from the target image, wherein image data information from the target area is seamlessly replaced with image data information from the source area.
According to a fourth aspect, the present invention is realized by a digital image manipulation device comprising: means for receiving a source image and a target image, means for identifying a source area in the source image, wherein the source area has a set of coordinates, means for using the set of coordinates for the source area to, in response to said identification of said source area, identify a target area in the target image, and means for creating a digital image based on image data information from the source image, wherein image data information from the source area is seamlessly replaced with image data information from the target area.
The device according to the third and/or fourth aspect may further comprise a camera, and said source image and/or said target image may be received from said camera.
According to a fifth aspect, the present invention is realized by a computer program product for digital image manipulation. Thus, a computer program product is provided comprising software instructions which, when downloaded to a computer, are arranged to perform image processing according to the above-mentioned digital image manipulation methods.
The second, third, fourth and fifth aspects can generally have the same features and benefits as the first aspect. Other objects, features and advantages of the present invention will become apparent from the following detailed description, from the appended dependent claims, and also from the drawings.
In general, all terms used in the claims are to be construed according to their ordinary meaning in the technical field, unless they are explicitly defined herein. All references to "a/an/the [element, device, component, means, step, etc.]" are to be construed openly as referring to at least one instance of the element, device, component, means, step, etc., unless otherwise stated. The steps of any of the methods described herein need not be performed in the exact order shown, unless otherwise indicated.

Brief Description of the Drawings Embodiments of the present invention will now be described in more detail with reference to the accompanying drawings, in which: Figs. 1a-b are schematic illustrations of devices according to embodiments, Fig. 2 is a schematic illustration of an example according to an embodiment, Figs. 3a-5c are schematic illustrations of examples according to embodiments, Fig. 6 is a schematic illustration of a sectioning method according to an embodiment, and Fig. 7 is a flow chart for a method according to embodiments.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The present invention will now be described in more detail below with reference to the accompanying drawings, in which particular embodiments are shown. The same reference numerals are used throughout for the same elements. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; these embodiments are provided by way of example so that this description is detailed and complete and conveys the scope of the invention to those skilled in the art. For example, for illustrative purposes, the content is presented in a JPEG context. However, the content is also applicable to other standards and formats, mutatis mutandis.
Fig. 1a is a schematic illustration of a mobile communication device 100 according to an embodiment. The device 100 may be a computer. The device 100 may be a personal digital assistant (PDA).
The device 100 may be a mobile telephone. Generally, the device 100 comprises circuits arranged to perform a number of operations and will be described in terms of functional blocks. In general, the functional blocks can be implemented in various ways, such as one or more field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like. The device 100 comprises a processor functional block 104 which can be realized as a central processing unit and/or a specialized image processing unit, such as a hardware accelerator for JPEG. The processor may also refer to a graphics processing unit (GPU) capable of performing calculations, such as a pixel/fragment renderer in OpenGL/OpenCL. The image processing unit may be implemented as a computer software product comprising one or more software components, or as a dedicated image processing hardware unit. The software components may include software instructions which, when downloaded to a computer, are arranged to execute instructions associated with the image processing unit. The device 100 further comprises a memory functional block 106, which can be realized as a memory or a computer readable storage medium, such as a random access memory (RAM), a read only memory (ROM), a universal serial bus (USB) product such as a memory stick, or the like.
The device 100 further comprises a communication functional block 108, which can be realized as a receiver and/or a transmitter and/or a transceiver, and which is inter alia arranged to receive input from, and to supply output to, a functional block 110 for a human-machine interface (MMI), to another mobile communication device, a computer, or the like. The device 100 runs under the supervision of an operating system 102. The device 100 may further comprise an imaging unit 112, which may be realized as a camera or the like.
Alternatively, the device 100 may be operatively coupled to an external imaging unit (not shown) via the communication interface functional block 108. As will be shown below, the device may have access to sets of images from which the images may be selected. Such images may originate from a video sequence, such as a video file, or from a video surveillance camera. The external imaging unit may be connected to the device via an external network interface which may be wireless, such as a 3G modem, or a WLAN.
The memory functional block 106 may accommodate a computer program product 114 including software instructions which, when downloaded to a computer, such as the device 100, and run on the processor 104, are arranged to perform what is described herein. Alternatively, the software instructions may be distributed separately in a computer network (not shown).
Fig. 1b is a schematic illustration of an image manipulator 120 according to an embodiment. The image manipulator comprises a number of function blocks which may be implemented in the processor 104 of the device 100 in Fig. 1a.
The image manipulator 120 includes an image receiver 124 arranged to receive source images 148 and target images 150. The image receiver 124 may include a decoder 126 for decoding source and/or target images. The image manipulator 120 further comprises a source area identifier 128 coupled to the image receiver 124 and arranged to identify an area in the source image 148.
The source area identifier 128 may include a candidate source area identifier 130 arranged to identify a candidate source area on which the source area may be based. The image manipulator 120 further includes a target area identifier 132 coupled to the image receiver 124 and the source area identifier 128 and is arranged to identify in the target image 150 an area based on the source area.
The target area identifier 132 may include a candidate target area identifier 134 arranged to identify a candidate target area on which the target area may be based. The image manipulator 120 further includes a calculation block 136 coupled to the source area identifier 128 and the target area identifier 132, which may include a block 138 arranged to calculate a compensating motion, a block 140 arranged to calculate a section, and a block 142 arranged to calculate a detail measure. The image manipulator 120 further includes an image creator 144 coupled to the source area identifier 128 and the target area identifier 132 and arranged to create a digital image 158. The image creator 144 includes an image mixer 146 arranged to softly mix the image to be created. The source area identifier 128 is further connected to a signal receiver 154 arranged to receive user input, to a feature information block 156 which holds information related to features in the source image 148 and/or the target image 150, and to a feature condition block 152 which contains conditions related to the features.
A method of digital image manipulation will now be described with reference to the device 100 in Fig. 1a, to the image manipulator 120 in Fig. 1b and to the flow chart in Fig. 7. In general, what is described herein may allow a preliminary area of a digital image that is considered unsatisfactory, unfavorable, or undesirable to be specified and replaced with image data that is considered satisfactory from another digital image. Similarly, what is described herein may enable a preliminary area of a digital image that is considered satisfactory, favorable, or desirable to be specified and to replace image data that is considered unsatisfactory, unfavorable, or undesirable in another digital image. The digital images referred to herein may have been generated by a digital imaging unit 112, such as a digital still image camera or a digital video camera. Operations and devices used during the process of generating digital images are as such known in the art and will not be further described herein.
The method includes receiving a source image 148 and a target image 150, step S02. The source image and the target image may be received by the image receiver 124 in the image manipulator 120; the image receiver 124 may be the receiver in the communication interface 108 of the device 100. In the case where the device 100 includes an imaging unit 112, such as a camera, the source image and/or the target image may be received from the imaging unit 112.
Alternatively, the source image and / or the target image may be received from the memory functional block 106 of the device 100. The source image and the target image may have been taken as individual picture frames. Alternatively, the source image and the target image may be derived from a common video sequence, or from two video sequences from different times, or other substantially similar video sequences.
Fig. 2a illustrates an example of a source image 200a and Fig. 2b illustrates an example of a target image 200b. The source image 200a includes a first object 202a (in the present example in the shape of a building) and a second object 204a (in the present example in the shape of a human) enclosed in a source area 206a. Similarly, the target image 200b includes a first object 202b (in the present example in the shape of a building) similar to the first object 202a in the source image 200a, and a second object 204b (in the present example in the shape of a human) similar to the second object 204a in the source image 200a and enclosed in a target area 206b. The source object 204a in the source area 206a and the target object 204b in the target area 206b may represent one and the same (real) object. In general, the object to be replaced can be determined based on the properties of the object. Such properties can be a special unique code, or a color that is visible in the image. The color may be used similarly to the "blue screen" technology used to "cut out" parts of a video production, and the code may be a reflective marker, or a code similar to a two-dimensional bar code.
The source image and the target image may be associated with different imaging features, as provided by the feature condition functional block 152. The imaging feature may relate to exposure. For example, an area in the source image may be associated with a first exposure level while a corresponding area in the target image may be associated with a second exposure level that is higher or lower than the first exposure level. Thus, an "underexposed" area can be seamlessly replaced with a "correctly exposed" area. This allows high dynamic range (HDR) images to be created efficiently. The imaging feature may also be related to resolution. Resolution can be an effect of image zoom.
For example, the source image may have a higher resolution than the target image, or vice versa. Thus, low-resolution areas can be seamlessly replaced with high-resolution areas (while maintaining the aspect ratio of the area). The imaging feature may be related to focus. For example, an area in the source image can be "in focus" while a corresponding area in the target image is "out of focus", or vice versa; thus an area that is "out of focus" can be seamlessly replaced with an area that is "in focus". For example, the level of blur in an area of the source image may be lower than the level of blur in a corresponding area of the target image, or vice versa; thus an area with blur can be seamlessly replaced with an area with less blur.
The imaging feature may be related to flash levels. For example, the source image may have been taken with a flash level that is higher than the flash level used to capture the target image, or vice versa. A high flash level may be desirable for some areas of the scene to be captured, but it may also result in other areas of the image being overexposed. Overexposed areas can thus be seamlessly replaced by corresponding areas from another image taken with a lower flash level. The lower flash level may correspond to no flash being used. The imaging feature may thus be related to the "bracketing" of at least one of the exposure, zoom, focus, sharpness, and flash level. Thus, an image can be created based on areas from a source image and a target image associated with different levels of exposure, zoom, focus, sharpness, and/or flash level, all areas being associated with the desired exposure, zoom, focus, sharpness, and/or flash level. The imaging feature may also be related to a perceived quality of the area determined by parameters such as the proportion of smiles or open eyes, see below.
The source image can be selected from several possible source images. To select the source image from the plurality of possible source images, each of the plurality of source images may be associated with a distortion measure. The source image can then be selected from the plurality of source images as the image that has the smallest distortion measure. For example, the image can be selected from the plurality of source images as the image with the least blur. In general, the distortion measure can relate to any of the imaging features described above. The source image can also be selected as the image that is closest to the target image with respect to the distortion measure, see below.
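As an illustration, the following is a minimal sketch of such a selection, assuming blur as the distortion measure (variance of the Laplacian); the function names and the choice of measure are illustrative assumptions, not prescribed by the text.

```python
# A minimal sketch, assuming blur (variance of the Laplacian) as the
# distortion measure; names and the choice of measure are illustrative.
import numpy as np
from scipy.ndimage import laplace

def blur_distortion(image: np.ndarray) -> float:
    # Lower Laplacian variance means more blur, so negate it so that a
    # *smaller* return value means a *less* distorted (sharper) image.
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    return -float(np.var(laplace(gray)))

def select_source_image(candidates: list[np.ndarray]) -> np.ndarray:
    # Pick the candidate with the smallest distortion measure.
    return min(candidates, key=blur_distortion)
```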
The method includes identifying a source area 206a in the source image 200a, the source area 206a having a set of coordinates, step S04.
The source area can be identified by the source area identifier 128. The set of coordinates defines the position of the source area 206a in the source image 200a.
The set of coordinates can also define the area (in spatial terms) of the source area in the source image. For example, the set of coordinates may include coordinates that define the positions of corners and / or sides of a rectangle in the source image, the set of coordinates may include coordinates that define the area and position of a circle in the source image, and the like.
The source area may be identified based on a received signal, such as one received by the signal receiver 154, which identifies the source area. The received signal can be generated by user input via the MMI 110 in the device 100. Thus, a user can be allowed to manually identify and select an area or object (see below) in the source image via the MMI 110. Typically, the source image is displayed on a display of the MMI 110. In the case where the MMI 110 includes a touch-sensitive display, the user can identify the source area by drawing a line enclosing the source area, or otherwise marking its coordinates. The MMI 110 can also provide a tool to support the identification of the source area. The tool can include predetermined geometric objects (such as rectangles, squares, circles and ellipses) which, via user input, can be used to identify the source area.
As will be further described below, the source area can also be identified by the use of feature recognition. The feature recognition process may use feature information provided by the feature information functional block 156. The use of such feature recognition may eliminate, or at least reduce, the need to receive user input to identify the source area. Thus, the method may comprise receiving feature information related to a feature in at least one of the source image and the target image, step S14.
US2009190803A discloses a method and system for detecting and tracking facial expressions in digital images and applications therefor. According to US2009190803A, an analysis of the image determines whether there is a smile and/or a blink on a human face in the image. Thus, the feature recognition may be related to the recognition of a human face. The feature recognition may be related to the recognition of a human mouth. The feature recognition may be related to the recognition of a human eye. The feature recognition can also be related to the recognition of a person's facial expression.
Either the source image or the target image, or both the source image and the target image include the feature. The source area can then be identified by using the feature, step S16.
The method further comprises using the coordinates of the source area 206a (in the source image 200a) to, in response to the identification of the source area 206a, identify a target area 206b in the target image 200b, step S06.
The target area can be identified by the target area identifier 132. In other words, the geometry of the target area 206b is based on the geometry of the source area 206a.
In particular, the set of coordinates can be used such that the set of coordinates for the target area in the target image corresponds to the set of coordinates for the source area. The set of coordinates of the target area 206b defines the position of the target area 206b in the target image 200b. The set of coordinates of the target area 206b may further define the area (in spatial terms) of the target area 206b in the target image 200b.
Image data information in the source image and image data information in the target image can represent essentially the same scene. In other words, a measure of distortion between the source image and the target image may be required to be less than a predetermined value. The distortion measure can be related to the energy between the source image and the target image, see below. The distortion measure can take movements into account, such as translation and/or rotation, the amount of movement being limited by predetermined threshold values. One way to reduce the risk that the distortion measure is higher than the predetermined value is to require that the source image and the target image be received within a predetermined time interval, and/or within an interval of total movement within the scene. It may also be required that the source image and the target image have been taken within a predetermined time interval. To determine whether the source image and the target image have been taken within a predetermined time interval, the device 100 may include timing circuits arranged to measure the time that passes between receiving the source image and receiving the target image. Alternatively, the source image and the target image may be associated with timestamps.
Motion compensation: When two or more pictures are taken with a camera, the camera may have moved in relation to the captured scene, and/or objects in the scene may have moved in relation to each other and/or to the camera between the individual shots. Therefore, the source image data to replace the specified target image data (or vice versa) can be translated and/or rotated, by using the compensating motion calculator 138, to compensate for movements of the imaging unit between the capture of the source image and of the target image. This translation can inter alia be achieved by minimizing an error function based on the per-pixel squared difference between intensities in the source image and in the target image along a preliminary boundary. In general terms, the relative motion of the imaging unit can be interpreted as an arbitrary projective transformation. The movement can thus relate to at least one of translation, rotation, or projection of at least a part of the source image in relation to the target image.
When images taken within a predetermined time interval are considered, the movement can in many cases be approximated with a simple translation and/or rotation of one image relative to another. For example, it can be assumed that the motion adds a normally distributed offset to the pixels in the image. A sampling procedure can be used to reduce the number of evaluations of the error function. The step of identifying the source area may thus comprise determining a compensating motion between at least a part of the source image and at least a part of the target image. The compensating motion can be selected so that it minimizes a difference between the source image and the target image, step S18.
Correspondence between pixels in the source image and in the target image can be accomplished by inter alia applying Harris' corner detection process to both images and using a statistical algorithm, such as RANSAC, to determine a correspondence map. Alternatively, invariant feature correspondences between the source image and the target image can be determined. These correspondences can then be used to modify the source image or the target image by using, inter alia, a thin plate spline (TPS) method. TPS refers to the physical analogy of fixing points on a surface and letting an imaginary thin metal plate deform in a way that minimizes its internal energy. The displacement of the imaginary metal plate can then be used as a coordinate mapping to warp the source image or the target image.
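A minimal sketch of the TPS warping step, assuming SciPy's RBFInterpolator (available since SciPy 1.7) as the thin plate spline solver and taking the point correspondences (e.g. Harris corners filtered with RANSAC) as given; all names and shapes are illustrative.

```python
# src_pts and dst_pts are (N, 2) arrays of matched (x, y) feature points.
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_displacement_field(src_pts, dst_pts, grid_shape):
    """Per-pixel (dx, dy) displacement mapping src points onto dst points."""
    tps = RBFInterpolator(src_pts, dst_pts - src_pts,
                          kernel='thin_plate_spline')
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    return tps(pixels).reshape(h, w, 2)  # warp either image with this field
```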
To find a suitable section (see below), it may be advantageous to align the source image and the target image along a preliminary section boundary. This can be accomplished by minimizing an error function, for example by minimizing E(r) = Σ_{v∈S} [I_s(v + r) − I_t(v)]² along the preliminary section boundary ∂Ω over a suitably large area S, where I_s and I_t denote the intensities of the source image and the target image, respectively. For a color image, the intensities can be calculated according to I = (R + G + B)/3, where R, G, and B are the respective color channels red, green, and blue. Alternatively, the luminance channel can be used for a YUV image. The size of the area S may depend on the amount of movement between the source image and the target image. It can also depend on the time that has elapsed between the taking of the source image and the target image. There are different approaches to minimizing the error function, for example by using convolution and fast Fourier transforms. A convolution-based procedure can be fast when the entire error function is to be calculated. It has been discovered that since each term in the sum of the error function is positive, it follows that, when searching for a global minimum, a point whose partial sum exceeds the value of the current local minimum cannot be the global minimum. It therefore also follows that if the global minimum is found early, more sums can be terminated in advance and thus calculations can be saved. It may therefore be advantageous to calculate the error function over a few points in a small area around zero translation and then extend the search outwards. This can be justified by the above-mentioned assumption that the translational movement can be considered to be normally distributed.
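A minimal sketch of evaluating E(r) with the early-termination idea just described; all names are illustrative, and the translation r is assumed to keep every sampled pixel inside the source image.

```python
# src and tgt are 2D intensity arrays; region is an iterable of (y, x)
# pixels in the area S; r = (dy, dx) is the candidate translation.
def error_with_early_exit(src, tgt, r, region, best_so_far):
    dy, dx = r
    total = 0.0
    for y, x in region:
        diff = float(src[y + dy, x + dx]) - float(tgt[y, x])
        total += diff * diff
        if total >= best_so_far:  # partial sum already exceeds the current
            return None           # minimum, so r cannot be the global minimum
    return total
```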
Due to the nature of the translational movement, the error function can be considered smooth (and not completely chaotic). The error function can be sampled at intervals that are initially comparatively large. However, if the initial sampling distance d is too large, the optimal solution may be incorrectly skipped. The sampling density can therefore be increased iteratively, while the neighborhoods of the samples that have the relatively highest errors are excluded from further calculations. More specifically, the initial sample distance can be selected as a power of two and halved with each new iteration.
For such a case, for areas not already abandoned, at each iteration except the first, one of four samples is already calculated. At the end of each iteration, the neighborhoods of the worst three quarters of the samples are abandoned. For a square image where the number of pixels n along each side of the image is a power of two, the number of samples calculated is n²/d² · (1 + (3/4)·log₂(d)). Additional calculations are needed when the best quarter of the samples is to be determined, but since it is comparatively expensive to evaluate each sample, these additional calculations can be disregarded when estimating the total computational effort required.
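A sketch of this coarse-to-fine schedule under the stated assumptions: sample on a grid with spacing d (a power of two), keep the best quarter, halve d, and refine only around the survivors. Already-computed samples are reused via memoization, mirroring the observation that one of four samples is already calculated. All names are illustrative.

```python
# evaluate((dy, dx)) returns the error for a candidate translation.
def coarse_to_fine_search(evaluate, extent, d0=16):
    memo = {}
    def cost(p):
        if p not in memo:           # reuse already-computed samples
            memo[p] = evaluate(p)
        return memo[p]
    active = {(dy, dx) for dy in range(-extent, extent + 1, d0)
                       for dx in range(-extent, extent + 1, d0)}
    d = d0
    while True:
        ranked = sorted(active, key=cost)
        if d == 1:
            return ranked[0]
        survivors = ranked[:max(1, len(ranked) // 4)]  # keep the best quarter
        d //= 2                                        # halve the sample distance
        active = {(sy + oy, sx + ox) for sy, sx in survivors
                  for oy in (-d, 0, d) for ox in (-d, 0, d)}
```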
"Saliency": "Saliency" can be considered as a measure of the visibility (or clarity) of a detail or part of an image in relation to adjacent parts or details of the image. "Saliency" can be considered as a way of representing details (inter alia corresponding information content or entropy) in an image. "Saliency" can be used to determine the cut (see below) to steer the cut away from areas of the source image and / or target image that contain such details. "Saliency" can also be used to measure the proportion of eye-catching features introduced due to irregularities in the cut or the soft mixture. To facilitate the use of a detail measure, such as "Saliency", the source area may be associated with a candidate source area (as identified by the candidate source area identifier 130), similarly, the target area may be associated with a candidate target area (as identified by the candidate target area identifier 134).
The candidate source area and the candidate target area can be defined along the outer boundary of the source area and the target area, respectively, as further described with reference to the calculation of the boundary and section.
The method may thus further comprise determining a detail measure between a candidate source area in the source image and a candidate target area in the target image, step S20. The source area and the target area can then be identified based on the determined detail measure, as calculated by the detail measure calculator 142.
A method for calculating "Saliency" for an image is described in the article "A model of saliency-based visual attention for rapid scene analysis" in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, November 1998, by Itti et al. Briefly, the method transforms an RGB image into maps that are more representative of the way the human visual system handles image data. It then performs a series of convolutions on these maps. The result is normalized by using an inhibitory function and then summed to create a final "Saliency" map. This map can thus provide an indication of the spatial distribution of "Saliency" in the image.
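For illustration, a heavily simplified center-surround saliency sketch in the spirit of Itti et al., using only the intensity channel; the full method also uses color-opponency and orientation maps, image pyramids, and an inhibitory normalization, none of which are shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    sal = np.zeros_like(gray, dtype=float)
    for center, surround in [(1, 4), (1, 8), (2, 8)]:  # center-surround scales
        sal += np.abs(gaussian_filter(gray, center) -
                      gaussian_filter(gray, surround))
    return sal / (sal.max() + 1e-9)  # normalized spatial saliency distribution
```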
Interface: A suitable interface along which the source image is to be inserted can be determined. The specific interface represents a section. The section thus defines where the target image and the source image meet and can be calculated by the section calculator 140. For illustrative purposes, the source image can be considered placed on top of the target image, the source image (or target image) having been translated as previously described. Inner and outer boundaries defining an area (corresponding to the area S described above) within which the (desired) interface is confined can be specified. Fig. 6a illustrates an interface 600 having an inner boundary 602, an outer boundary 604 and a desired boundary 606. An error function based on the per-pixel squared difference between the source image and the target image for pixels within the area is defined. Fig. 6b is a corridor graph representation 600′ of the interface 600 in Fig. 6a, the interface 600 having been opened along the section A-A′ – B-B′. The section can be determined by assigning each pixel in the area S a cost and then finding a closed path in the area S that minimizes the cost. An optimal solution is given in the article "Shortest circular paths on planar graphs" in the 27th Symposium on Information Theory in the Benelux, pp. 117-124, June 2006, Noordwijk, The Netherlands, by Farin et al. When the pixels are represented by a graph, such as the corridor graph 600′, the graph has a trellis structure.
The graph associated with the area S is thus represented by a trellis, and therefore dynamic programming can be used instead of Dijkstra's algorithm. Thus, a tail-biting path (i.e., a path in the corridor graph 600′ beginning and ending in pixels that yield an enclosing contour 606 in the corresponding interface 600) with a minimum error over all paths, or with an error below a predetermined threshold, can then be found within the area. The section thus found defines the cropping of the source image and the target image. An approximate solution, which is both faster and easier to implement, can be found through an iteration of dynamic programming. For each node in the graph, it is known where the shortest path that crosses it started, and thus a closed path can easily be found when the iteration is complete. Thus, in a step S22, the method may include identifying the source area and the target area by calculating a section using the candidate source area and the candidate target area. The section can define a boundary enclosing the source area, and the section can be determined such that the detail measure is minimized.
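A sketch of the dynamic programming step over a trellis of per-pixel costs (e.g. squared source/target differences inside the band S). It finds an open top-to-bottom seam; the tail-biting (closed) path described above can be obtained by repeating such a search with fixed start and end pixels. Names are illustrative.

```python
import numpy as np

def min_cost_seam(cost):
    """cost: (h, w) per-pixel costs; returns one column index per row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()  # best reachable predecessor
    seam = [int(np.argmin(acc[-1]))]              # cheapest end node
    for y in range(h - 2, -1, -1):                # backtrack to the top row
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]
```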
The method further comprises, in a step S08, creating a digital image. The digital image can be based on image data information from the target image, whereby image data information from the target area is seamlessly replaced with image data information from the source area. Correspondingly, the digital image can be based on image data information from the source image, whereby image data information from the source area is seamlessly replaced with image data information from the target area. The image can be created by the image creator 144. Fig. 2c illustrates an example of an image 200c based on image data information from the source image 200a and the target image 200b in Fig. 2a and Fig. 2b, respectively. Compared with the source image 200a and the target image 200b, the created image 200c includes the first object 202b from the target image 200b and the second object 204a from the source image 200a. Thus, in the created image 200c, a target area corresponding to the second object 204b in the target image 200b has been replaced with a source area corresponding to the second object 204a in the source image 200a.
Feature recognition can be used to determine whether one of the source area and the target area meets a condition related to the feature information, step S24. The digital image can then be created based on this determination, step S26. For example, in a case where the source area meets the condition, the image data information in the target area can be seamlessly replaced with the image data information in the source area. For example, in a case where the target area meets the condition, the image data information in the source area can be seamlessly replaced with the image data information in the target area.
In particular, if the feature information is related to a facial expression and one of the source image and the target image is classified as including a smiling face, the image is created so as to include the smiling face. To achieve this, face recognition can be used. Once a face has been detected, smile detection can be applied by detecting the lips in the detected face and classifying the lips, inter alia depending on their curvature, into at least two categories, such as smiling lips and non-smiling lips. A similar classification can be performed to detect blinking eyes or red eyes (inter alia caused by flash effects during the taking of the image including the red eyes). Thus, the method may further comprise identifying a facial expression to be used when creating the digital image, whereby "sad" faces are replaced by "happy" faces, the "sad" faces being associated with non-smiling lips and the "happy" faces being associated with smiling lips.
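An illustrative sketch of the curvature-based lip classification, assuming lip contour points are already available from a face/lip detector (not shown); the parabola-fit criterion is one possible realization of the classification described above, not the method prescribed by the text.

```python
import numpy as np

def is_smiling(lip_points: np.ndarray) -> bool:
    """lip_points: (N, 2) array of (x, y) points along the mouth contour."""
    a = np.polyfit(lip_points[:, 0], lip_points[:, 1], deg=2)[0]
    # With image y growing downward, mouth corners higher than the center
    # give a downward-opening parabola (a < 0), i.e. a smile.
    return a < 0
```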
Soft blend: A gradient-based soft blend can be applied to ensure that the insertion of the source area into the target image (or vice versa) is seamless. In more detail, the interface of the source image should be at least approximately equal to the interface of the target image when a source image (or an area therein) is to be blended into a target image (or vice versa). This may require that the source image and/or the target image be manipulated in some way, for example by the image mixer 146. In order to be as visually undetectable as possible, the manipulation should preferably only introduce gradual changes to the interior of the source image.
An example that achieves the desired effect is the Poisson blend. More specifically, the source image can be modified in such a way that the modified source image has a gradient field that is closest (in L2 norm terms) to that of the original source image, subject to boundary conditions given by the target image. Therefore, the Poisson blend is also known as gradient domain blending. The harmonic membrane calculated in the Poisson blend can be approximated using mean-value coordinates. The membrane is very smooth away from the interface of the cloned area. Thus, a very sparse computational density can be used away from the interface, and linear interpolation can instead be used for the remaining pixels. As will be described next, one way to accomplish this is to use a so-called "quad-tree" structure. Once this structure is created, it can be reused for all three color channels. Thus, the method may further comprise soft mixing of the source image and the target image using quad-tree optimization, step S28.
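A minimal gradient-domain (Poisson) blending sketch for a single grayscale channel, solved here with plain Gauss-Seidel iterations rather than the mean-value/quad-tree approximation described in the text; the mask is assumed not to touch the image border.

```python
import numpy as np

def poisson_blend(src, tgt, mask, iters=500):
    """src, tgt: 2D float arrays; mask: boolean region to clone into tgt."""
    out = tgt.astype(float).copy()
    ys, xs = np.nonzero(mask)
    for _ in range(iters):  # Gauss-Seidel sweeps over the masked pixels
        for y, x in zip(ys, xs):
            neigh = ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            s = sum(out[p] for p in neigh)  # boundary values come from tgt
            lap = 4 * src[y, x] - sum(src[p] for p in neigh)  # src Laplacian
            out[y, x] = (s + lap) / 4.0     # discrete Poisson update
    return out
```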
A node in the tree has the shape of a square and is called an s-node when it has the side length l = 2^s. Each node has 4 children of the type (s−1)-nodes, thus creating a "quad-tree". The smallest node is then the 0-node with the side length l = 1. To ensure that the node size does not increase too fast, a restriction that two adjacent nodes must have a size difference of |Δs| ≤ 1 is added. In addition, the nodes on the interface are defined as 0-nodes, thus creating a basis for developing the tree structure.
A memory map can first be initialized by filling it with zeros (representing 0-nodes) inside the area Ω to be blended and with −1 outside the area. It is then iterated over this map with step length l_n = 2^n in both height and width, where n = 1, 2, .... For each visited point, the adjacent points are compared with the current iteration (n). If all adjacent points are at least (n−1)-nodes and all are inside Ω, the current point is promoted to an n-node. This process is then repeated for all n.
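A sketch of the memory-map construction just described, with illustrative names; `inside` is assumed to be a boolean map of the region Ω to be blended.

```python
import numpy as np

def build_quadtree_map(inside: np.ndarray, max_level: int) -> np.ndarray:
    level = np.where(inside, 0, -1)   # 0-nodes inside the region, -1 outside
    h, w = inside.shape
    for n in range(1, max_level + 1):
        step = 2 ** n                 # step length l_n = 2**n
        for y in range(0, h, step):
            for x in range(0, w, step):
                neigh = ((y - step, x), (y + step, x),
                         (y, x - step), (y, x + step))
                ok = all(0 <= ny < h and 0 <= nx < w and level[ny, nx] >= n - 1
                         for ny, nx in neigh)
                if ok and level[y, x] == n - 1:
                    level[y, x] = n   # promote the point to an n-node
    return level
```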
A reduction in computational claims can be achieved by sampling the interface instead of using all the interface pixels.
The mean value weight for each interface node decreases rapidly with distance.
Thus, an accurate approximation of the area to be blended can be achieved by sampling the interface with a density that is inversely proportional to the distance. For example, only a constant number of interface nodes can be used when calculating the coordinates and the area to be blended at each quad node. Such sampling should be based on decoding the desired region at one or more smaller scales. Such decoding can be performed quickly and accurately in image formats such as JPEG. After an optimal cut has been determined, local discontinuities can be attenuated (before the soft blend) using the thin plate spline procedure described above, after determining the correspondence between the source image and the target image along the cut. For each point on the source interface, a point on the target interface can be determined that minimizes the per-pixel squared difference along a line segment surrounding the points; this can be seen as a form of one-dimensional feature matching.
To overcome problems with delays, i.e. that compressed full-size images take too long to process, and with storage capacity, i.e. that uncompressed images take up too much storage space, the source and/or target image can be analyzed, and features that facilitate rapid manipulation of an image can be stored in the respective image file(s), temporarily stored in memory for the current manipulation, or stored as an entry in a database, where the entry in the database can refer to the image file. A series of methods for analyzing, extracting and storing such features related to an image are described in patent application WO 2005/050567 by Scalado AB.
To extract features that facilitate rapid manipulation of an image, the features can either be extracted upon compression of the source and/or target image, or be extracted through post-compression analysis of a compressed image, such as by the image decoder 126. In case the source and/or the target image has been compressed with JPEG compression, or a similar compression method, the features that facilitate rapid manipulation of a stored or received image may be one of, or a combination of: indicators for minimum coded units (MCUs), where an MCU is a small image block in an image; indicators for one or more data units (DUs), where a data unit is a data block that represents a color channel or color component of the MCU; one or more absolute or relative DC coefficients for one or more of the color components of the received MCUs and/or received data units; or the number of bits between data units, or between specific coefficients of the data units. Since features may need to be extracted and/or analyzed at different scales, such techniques can be used to perform such extraction efficiently. How such features can be used to effect rapid manipulation of an image is described in the above application, i.e. WO 2005/050567 by Scalado AB.
Thus, in case the source image has been encoded, the method may further comprise decoding, in the source image, only image data information representing the source area, and/or decoding such areas at the required scale, step S30, by using the above-described features that facilitate rapid manipulation of an image. Similarly, in case the target image has been encoded, the method may further comprise decoding, in the target image, only image data information representing the target area, step S32, by using the above-described features that facilitate rapid manipulation of an image. The final encoded new image can also be composed of the source/target images with parts replaced by new image data, thus facilitating reuse of at least parts of the compressed source/target images.
The length of a data unit can be defined as the number of bits needed to represent the data unit. When compressing MCUs and DUs, it is common to use variable length coding, such as Huffman coding, which results in the data units having different DU lengths depending on the amount of information thus represented. Thus, a data unit representing a high level of information (corresponding to a high level of information content in the corresponding image block) may have a longer DU length than a data unit representing a level of information lower than said high level of information (corresponding to a level of information content in the corresponding image block lower than said high level of information content). Thus, the above-mentioned detail measure can be based on DU lengths for data units in the candidate source area and the candidate target area. Since MCUs consist of one or more DUs, the measure may also be based on one MCU or a plurality of DUs or MCUs. The method may further comprise, in a step S34, replacing one of the target image and the source image with the created digital image, thereby reducing the memory requirements for storing images.
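A sketch of how the DU-length detail measure could be formed, assuming the per-data-unit bit lengths have already been extracted from the compressed stream (e.g. as described in WO 2005/050567); names are illustrative.

```python
def detail_measure(du_bit_lengths, area_du_indices):
    """Summed coded length (in bits) of the data units covering an area."""
    return sum(du_bit_lengths[i] for i in area_du_indices)
```

The candidate area with the larger measure contains more detail, so a cut can be steered away from it without decoding any pixel data.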
Next, typical scenarios where the described content can be applied will be described.
Example 1: A first example, illustrated in Figs. 3a, 3b, 3c and 3d, relates to a case in which it may be desirable to replace non-stationary objects in an image. When an image comprising one or more non-stationary objects, such as humans, is to be taken, it is not uncommon for (parts of) the non-stationary objects to move between the images. This can result in images where the capture of the non-stationary object is seen as unsatisfactory, unfavorable, or undesirable, such as an image in which a human blinks, or the like. In this case, one possibility would be to take another picture of the same scene and hope that the result will be more satisfactory, favorable, or desired. However, when a second image is taken, other objects may be considered unsatisfactory, unfavorable, or unwanted. Thus, none of the images taken, when judged individually, may be considered satisfactory, favorable, or desirable. In this case, it would be desirable to replace only certain areas of the first image with more satisfactory, favorable, or desired imagery of the same scene. For example, in a first shot 300a of a group photo, the appearance of a first person 302a may be considered unsatisfactory (inter alia due to a blink of an eye) while the appearance of a second person 304a may be considered satisfactory and the appearance of a third person 306a may be considered unsatisfactory (inter alia due to a blink of an eye). In a second shot 300b of the same group photo, the appearance of the first person 302b may be considered satisfactory while the appearance of the second person 304b may be considered unsatisfactory and the appearance of the third person 306b may be considered unsatisfactory. In a third shot 300c of the same group photo, the appearance of the first person 302c and the second person 304c may be considered unsatisfactory while the appearance of the third person 306c may be considered satisfactory.
By using the methods described herein, it is possible in the first image 300a to replace image data information representing the first person 302a with image data information representing the first person 302b from the second image 300b, and to replace image data information representing the third person 306a with image data information representing the third person 306c from the third image 300c, thus resulting in a satisfactory image 300d. Similarly, the image 300d can be based on either the image 300b or the image 300c.
Thus, the method may generally comprise receiving at least one additional source and/or target image, in a step S10, reviewing the target image, the source image and the at least one additional source and/or target image, in a step S12, and creating an image based on the source image, the target image, and the at least one additional source and/or target image.
Example 2: A second example, illustrated in Figs. 4a, 4b, and 4c, relates to a case in which it may be desirable to remove non-stationary objects from an image. In a first image 400a, for example, a non-stationary object 404a in a first area 406a may obstruct the view of a desired object 402 to be captured, while in a second image 400b, a non-stationary object 404b in a second area 406b may obstruct the view of a desired object 402 to be captured.
Using the methods described herein, it is possible in the first image 400a to replace image data information representing the non-stationary object 404a in the first region 406a with image data information representing the first region 406a from the second image 400b, thus resulting in a satisfactory image 400c. Similarly, by using the methods described herein, it is possible in the second image 400b to replace image data information representing the non-stationary object 404b in the second region 406b with image data information representing the second region 406b from the first image 400a, thus resulting in a satisfactory image 400c.
Example 3: A third example, illustrated in Figs. 5a, 5b, and 5c, relates to a case in which it may be desirable to add non-stationary objects to an image. In a first image 500a, for example, a non-stationary object 504 may be located at a first set of coordinates, while the same non-stationary object 506 may be located at a second set of coordinates in a second image 500b. Using the methods described herein, it is possible to seamlessly insert a copy of the non-stationary object 504 located at the first set of coordinates into the second image 500b and/or to insert a copy of the non-stationary object 506 located at the second set of coordinates into the first image 500a, thus resulting in the composite image 500c.
The invention has been described above mainly with reference to specific examples. However, as will be appreciated by those skilled in the art, examples other than those described above are possible within the scope of the invention as defined by the appended claims.
权利要求:
Claims (1)
[1]
1. 0 15 20 25 30 22 PATENT REQUIREMENTS. Digital image manipulation method comprising: receiving a source image and a target image, identifying a source area in the source image, wherein the source area has a set of coordinates, using the set of coordinates for the source area to, in response to said identifying said source area, identifying a target area in the target image, and creating a digital image based on image data information from the target image, whereby image data information from the target area is seamlessly replaced with image data information from the source area. . Digital image manipulation method comprising: receiving a source image and a target image, identifying a source area in the source image, wherein the source area has a set of coordinates, using the set of coordinates for the source area to, in response to said identifying said source area, identifying a target area in the target image, and the creation of a digital image based on image data information from the source image, whereby image data information from the source area is seamlessly replaced with image data information from the target area. . The method of claim 1 or 2, further comprising receiving feature information regarding a feature in at least one of said source image and said target image, wherein at least one of said source image and said target image comprises said feature, and identifying said source area based on said feature. . The method of claim 3, further comprising - determining whether one of said source area and said target area satisfies a condition based on said feature information, and - creating said digital image based on said determination. The method of claim 4, wherein said condition is related to feature recognition of at least one of the features of the group a human, a human face, a human mouth and a human eye. A method according to any one of the preceding claims, wherein the source image is selected from a plurality of possible source images. The method of claim 6, wherein each of said plurality of possible source images is associated with a distortion measure, and wherein the source image is selected from the plurality of possible source images as the image having the smallest distortion measure. A method according to any one of the preceding claims, wherein the source image and the target image are received within a predetermined time interval. A method according to any one of the preceding claims, wherein the source image and the target image are associated with different imaging features. The method of claim 9, wherein the imaging features are related to at least one of exposure, resolution, focus, blur and flash level. A method according to any one of the preceding claims, wherein the set of coordinates for the source area is used so that a set of coordinates for the target area in the target image corresponds to the set of coordinates for the source area. A method according to any one of the preceding claims, wherein image data information for the source image and image data information for the target image represent substantially the same scene. A method according to any one of the preceding claims, wherein identifying said source area further comprises - determining a compensating motion between at least a portion of said source image and at least a portion of said target image such that said compensating motion minimizes a difference between the source image and the target image. 
The method of claim 13, wherein said difference is determined for at least a portion of said target image and is related to at least one of translation, rotation, or projection of at least a portion of the source image in relation to said target image. A method according to claim 12 or 13, further comprising - using a "thin plate spline" method to determine said compensation movement A method according to any one of the preceding claims, further comprising - determining a detail measure between a candidate source area in said source image. and a candidate target area in said target area wherein said source area and said target area are identified by the determined detail measure. The method of claim 16, further comprising - determining the source area and the target area by calculating a section using the candidate source area and the candidate target area, said section defining a boundary enclosing the source area, and wherein said section is determined so as to minimize said detail size. A method according to claim 16 or 17, wherein said detail size is based on data unit lengths for data units in said candidate source area and said candidate unit area. candidate target area. A method according to any one of the preceding claims, further comprising soft mixing between the source image and the target image by using quad-tree optimization A method according to any one of the preceding claims, further comprising - replacing one of the target image and the source image with the created one. A method according to any one of the preceding claims, wherein the source image and the target image have been coded, the method further comprising - encoding in the source image only image data information representing the source area, and - decoding in the target image only image data information representing the target area. according to any one of the preceding claims, wherein the source area corresponds to a source object, wherein the target area corresponds to a target object, and wherein the source object and the target object represent one and the same object 23. A method according to any one of the preceding claims, wherein the source image and the target image are derived from a video sequence. according to any one of the preceding claims, further incl atting - receiving a signal identifying said source area. Device for digital image manipulation comprising: - means for receiving a source image and a target image, - means for identifying a source area in the source image, wherein the source area has a set of coordinates, means for using the set of coordinates for the source area for in response to said identification of said source area, identifying a target area in the target image, and means for creating a digital image based on image data information from the target image, wherein image data information from the target area is seamlessly replaced with image data information from the source area. An apparatus for digital image manipulation comprising: means for receiving a source image and a target image, means for identifying a source area in the source image, the source area having a set of coordinates, means for using the set of coordinates for the source area to, in response to said identification of said source area, identifying a target area in the target image, and means for creating a digital image based on image data information from the source image, wherein image data information from the source area is seamlessly replaced with image data information from the target area. 
27. The device of claim 25 or 26, further comprising a camera, wherein said source image and said target image are received from said camera.

28. A computer program product comprising software instructions which, when downloaded to a computer, are adapted to perform a method according to any one of claims 1 to 24.
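One plausible reading of the "quad-tree optimization" in claim 19 is that fine-grained blending work is spent only near the seam: uniform regions get a single blend weight, mixed regions are subdivided. The sketch below illustrates that reading under stated assumptions (a square, power-of-two-sized blend mask; an invented function name); the patent does not publish this exact procedure.

```python
import numpy as np

def quadtree_blend_regions(mask: np.ndarray, top: int = 0, left: int = 0,
                           size: int | None = None, min_size: int = 4):
    """Yield (top, left, size, weight) squares over which a single blend
    weight suffices. Uniform quadrants are emitted whole; mixed quadrants
    are subdivided, concentrating per-pixel blending along the seam."""
    if size is None:
        size = mask.shape[0]  # assumes mask is square, power-of-two sized
    block = mask[top:top + size, left:left + size]
    if size <= min_size or block.min() == block.max():
        yield top, left, size, float(block.mean())
        return
    half = size // 2
    for dt, dl in ((0, 0), (0, half), (half, 0), (half, half)):
        yield from quadtree_blend_regions(mask, top + dt, left + dl,
                                          half, min_size)

# Hypothetical usage: a 256x256 mask with a replaced central area. Most of
# the image comes back as a handful of large constant-weight squares.
demo_mask = np.zeros((256, 256))
demo_mask[64:192, 64:192] = 1.0
regions = list(quadtree_blend_regions(demo_mask))
```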
Similar technologies:
Publication number | Publication date | Patent title
SE1000142A1|2011-08-16|Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image
US20190102878A1|2019-04-04|Method and apparatus for analyzing medical image
JP6154075B2|2017-06-28|Object detection and segmentation method, apparatus, and computer program product
CN106899781B|2020-11-10|Image processing method and electronic equipment
EP2947627B1|2017-11-15|Light field image depth estimation
CN107516319B|2020-03-10|High-precision simple interactive matting method, storage device and terminal
US9600898B2|2017-03-21|Method and apparatus for separating foreground image, and computer-readable recording medium
EP3457683B1|2020-07-22|Dynamic generation of image of a scene based on removal of undesired object present in the scene
US20170147609A1|2017-05-25|Method for analyzing and searching 3d models
KR20150126768A|2015-11-13|Method for composing image and electronic device thereof
US10764563B2|2020-09-01|3D enhanced image correction
US20160191898A1|2016-06-30|Image Processing Method and Electronic Device
US20180068451A1|2018-03-08|Systems and methods for creating a cinemagraph
Li et al.2017|Optimal seamline detection in dynamic scenes via graph cuts for image mosaicking
Ferreira et al.2016|Fast and accurate micro lenses depth maps for multi-focus light field cameras
CN108596923B|2020-10-16|Three-dimensional data acquisition method and device and electronic equipment
KR20140138046A|2014-12-03|Method and device for processing a picture
KR101566459B1|2015-11-05|Concave surface modeling in image-based visual hull
Hackl et al.2018|Diminishing reality
Zhang et al.2020|Light field salient object detection via hybrid priors
Mizuguchi et al.2018|Basic Study on Creating VR Exhibition Content Archived Under Adverse Conditions
CN111382647B|2021-07-30|Picture processing method, device, equipment and storage medium
Bui et al.2017|Multi-focus application in mobile phone
Yusiong et al.2020|Unsupervised monocular depth estimation of driving scenes using siamese convolutional LSTM networks
WO2015162027A2|2015-10-29|Method, device, user equipment and computer program for object extraction from multimedia content
Patent family:
Publication number | Publication date
US20140177975A1|2014-06-26|
US9196069B2|2015-11-24|
US20110200259A1|2011-08-18|
US20140101590A1|2014-04-10|
SE534551C2|2011-10-04|
EP3104331A1|2016-12-14|
EP2360644A3|2015-09-09|
EP2360644A2|2011-08-24|
EP3104332A1|2016-12-14|
US8594460B2|2013-11-26|
US9396569B2|2016-07-19|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

JP3036439B2|1995-10-18|2000-04-24|富士ゼロックス株式会社|Image processing apparatus and image attribute adjustment method|
US6985172B1|1995-12-01|2006-01-10|Southwest Research Institute|Model-based incident detection system with motion classification|
US6075905A|1996-07-17|2000-06-13|Sarnoff Corporation|Method and apparatus for mosaic image construction|
US6621524B1|1997-01-10|2003-09-16|Casio Computer Co., Ltd.|Image pickup apparatus and method for processing images obtained by means of same|
US6542645B1|1997-07-15|2003-04-01|Silverbrook Research Pty Ltd|Adaptive tracking of dots in optical storage system using ink dots|
KR100595920B1|1998-01-26|2006-07-05|웨인 웨스터만|Method and apparatus for integrating manual input|
JP3695119B2|1998-03-05|2005-09-14|株式会社日立製作所|Image synthesizing apparatus and recording medium storing program for realizing image synthesizing method|
WO1999067949A1|1998-06-22|1999-12-29|Fuji Photo Film Co., Ltd.|Imaging device and method|
US6317141B1|1998-12-31|2001-11-13|Flashpoint Technology, Inc.|Method and apparatus for editing heterogeneous media objects in a digital imaging device|
US6927874B1|1999-04-02|2005-08-09|Canon Kabushiki Kaisha|Image processing method, apparatus and storage medium therefor|
US6778211B1|1999-04-08|2004-08-17|Ipix Corp.|Method and apparatus for providing virtual processing effects for wide-angle video images|
US6415051B1|1999-06-24|2002-07-02|Geometrix, Inc.|Generating 3-D models using a manually operated structured light source|
WO2001059709A1|2000-02-11|2001-08-16|Make May Toon, Corp.|Internet-based method and apparatus for generating caricatures|
JP4126640B2|2000-03-08|2008-07-30|富士フイルム株式会社|Electronic camera|
US6959120B1|2000-10-27|2005-10-25|Microsoft Corporation|Rebinning methods and arrangements for use in compressing image-based rendering data|
US7099510B2|2000-11-29|2006-08-29|Hewlett-Packard Development Company, L.P.|Method and system for object detection in digital images|
US6975352B2|2000-12-18|2005-12-13|Xerox Corporation|Apparatus and method for capturing a composite digital image with regions of varied focus and magnification|
SE518050C2|2000-12-22|2002-08-20|Afsenius Sven Aake|Camera that combines sharply focused parts from various exposures to a final image|
US7162080B2|2001-02-23|2007-01-09|Zoran Corporation|Graphic image re-encoding and distribution system and method|
US7133069B2|2001-03-16|2006-11-07|Vision Robotics, Inc.|System and method to increase effective dynamic range of image sensors|
US6930718B2|2001-07-17|2005-08-16|Eastman Kodak Company|Revised recapture camera and method|
US7298412B2|2001-09-18|2007-11-20|Ricoh Company, Limited|Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program|
US6724386B2|2001-10-23|2004-04-20|Sony Corporation|System and process for geometry replacement|
US7573509B2|2002-01-30|2009-08-11|Ricoh Company, Ltd.|Digital still camera, reproduction device, and image processor|
US20030189647A1|2002-04-05|2003-10-09|Kang Beng Hong Alex|Method of taking pictures|
US20030190090A1|2002-04-09|2003-10-09|Beeman Edward S.|System and method for digital-image enhancement|
CN1312908C|2002-04-17|2007-04-25|精工爱普生株式会社|Digital camera|
AU2003214899A1|2003-01-24|2004-08-23|Micoy Corporation|Stereoscopic Panoramic Image Capture Device|
US6856705B2|2003-02-25|2005-02-15|Microsoft Corporation|Image blending by guided interpolation|
US7672538B2|2003-04-17|2010-03-02|Seiko Epson Corporation|Generation of still image from a plurality of frame images|
US20040223649A1|2003-05-07|2004-11-11|Eastman Kodak Company|Composite imaging method and system|
US7317479B2|2003-11-08|2008-01-08|Hewlett-Packard Development Company, L.P.|Automated zoom control|
JP4949037B2|2003-11-18|2012-06-06|スカラド、アクチボラグ|Method and image representation format for processing digital images|
US7743348B2|2004-06-30|2010-06-22|Microsoft Corporation|Using physical objects to adjust attributes of an interactive display application|
EP1613060A1|2004-07-02|2006-01-04|Sony Ericsson Mobile Communications AB|Capturing a sequence of images|
JP2006040050A|2004-07-28|2006-02-09|Olympus Corp|Reproduction device, camera and display switching method for reproduction device|
JP4293089B2|2004-08-05|2009-07-08|ソニー株式会社|Imaging apparatus, imaging control method, and program|
JP4455219B2|2004-08-18|2010-04-21|キヤノン株式会社|Image processing apparatus, image display method, program, and storage medium|
JP4477968B2|2004-08-30|2010-06-09|Hoya株式会社|Digital camera|
TWI246031B|2004-09-17|2005-12-21|Ulead Systems Inc|System and method for synthesizing multi-exposed image|
US9621749B2|2005-06-02|2017-04-11|Invention Science Fund I, Llc|Capturing selected image objects|
US7595823B2|2005-02-17|2009-09-29|Hewlett-Packard Development Company, L.P.|Providing optimized digital images|
US7659923B1|2005-06-24|2010-02-09|David Alan Johnson|Elimination of blink-related closed eyes in portrait photography|
AU2005203074A1|2005-07-14|2007-02-01|Canon Information Systems Research Australia Pty Ltd|Image browser|
US20070024721A1|2005-07-29|2007-02-01|Rogers Sean S|Compensating for improperly exposed areas in digital images|
JP4288612B2|2005-09-14|2009-07-01|ソニー株式会社|Image processing apparatus and method, and program|
US7483061B2|2005-09-26|2009-01-27|Eastman Kodak Company|Image and audio capture with mode selection|
US9270976B2|2005-11-02|2016-02-23|Exelis Inc.|Multi-user stereoscopic 3-D panoramic vision system and method|
US20090295830A1|2005-12-07|2009-12-03|3Dlabs Inc., Ltd.|User interface for inspection of photographs|
JP4790446B2|2006-03-01|2011-10-12|三菱電機株式会社|Moving picture decoding apparatus and moving picture encoding apparatus|
US7787664B2|2006-03-29|2010-08-31|Eastman Kodak Company|Recomposing photographs from multiple frames|
WO2007114363A1|2006-03-31|2007-10-11|Nikon Corporation|Image processing method|
US8564543B2|2006-09-11|2013-10-22|Apple Inc.|Media player with imaged based browsing|
KR101259105B1|2006-09-29|2013-04-26|엘지전자 주식회사|Controller and Method for generation of key code on controller thereof|
US8111941B2|2006-11-22|2012-02-07|Nik Software, Inc.|Method for dynamic range editing|
US7839422B2|2006-12-13|2010-11-23|Adobe Systems Incorporated|Gradient-domain compositing|
US7809212B2|2006-12-20|2010-10-05|Hantro Products Oy|Digital mosaic image construction|
US7956847B2|2007-01-05|2011-06-07|Apple Inc.|Gestures for controlling, manipulating, and editing of media files using touch sensitive devices|
JP4853320B2|2007-02-15|2012-01-11|ソニー株式会社|Image processing apparatus and image processing method|
US7729602B2|2007-03-09|2010-06-01|Eastman Kodak Company|Camera using multiple lenses and image sensors operable in a default imaging mode|
US7859588B2|2007-03-09|2010-12-28|Eastman Kodak Company|Method and apparatus for operating a dual lens camera to augment an image|
JP2009020144A|2007-07-10|2009-01-29|Brother Ind Ltd|Image display device and image display program|
JP4930302B2|2007-09-14|2012-05-16|ソニー株式会社|Imaging apparatus, control method thereof, and program|
JP2009124206A|2007-11-12|2009-06-04|Mega Chips Corp|Multimedia composing data generation device|
US8416198B2|2007-12-03|2013-04-09|Apple Inc.|Multi-dimensional scroll wheel|
US8494306B2|2007-12-13|2013-07-23|Samsung Electronics Co., Ltd.|Method and an apparatus for creating a combined image|
US8750578B2|2008-01-29|2014-06-10|DigitalOptics Corporation Europe Limited|Detecting facial expressions in digital images|
JP4492724B2|2008-03-25|2010-06-30|ソニー株式会社|Image processing apparatus, image processing method, and program|
US20090244301A1|2008-04-01|2009-10-01|Border John N|Controlling multiple-image capture|
US8891955B2|2008-04-04|2014-11-18|Whitham Holdings, Llc|Digital camera with high dynamic range mode of operation|
US20090303338A1|2008-06-06|2009-12-10|Texas Instruments Incorporated|Detailed display of portion of interest of areas represented by image frames of a video signal|
US8497920B2|2008-06-11|2013-07-30|Nokia Corporation|Method, apparatus, and computer program product for presenting burst images|
JP4513903B2|2008-06-25|2010-07-28|ソニー株式会社|Image processing apparatus and image processing method|
US8768070B2|2008-06-27|2014-07-01|Nokia Corporation|Method, apparatus and computer program product for providing image modification|
US8463020B1|2008-07-08|2013-06-11|Imove, Inc.|Centralized immersive image rendering for thin client|
JP2010020581A|2008-07-11|2010-01-28|Shibaura Institute Of Technology|Image synthesizing system eliminating unnecessary objects|
US8654085B2|2008-08-20|2014-02-18|Sony Corporation|Multidimensional navigation for touch sensitive display|
US8176438B2|2008-09-26|2012-05-08|Microsoft Corporation|Multi-modal interaction for a screen magnifier|
US20100091119A1|2008-10-10|2010-04-15|Lee Kang-Eui|Method and apparatus for creating high dynamic range image|
KR20100070043A|2008-12-17|2010-06-25|삼성전자주식회사|Method for displaying scene recognition of digital image signal processing apparatus, medium for recording the method and digital image signal processing apparatus applying the method|
WO2011040864A1|2009-10-01|2011-04-07|Scalado Ab|Method relating to digital images|
EP2323102A1|2009-10-23|2011-05-18|ST-Ericsson SAS|Image capturing aid|
SE534551C2|2010-02-15|2011-10-04|Scalado Ab|Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image|
SE1150505A1|2011-05-31|2012-12-01|Mobile Imaging In Sweden Ab|Method and apparatus for taking pictures|
WO2011040864A1|2009-10-01|2011-04-07|Scalado Ab|Method relating to digital images|
SE534551C2|2010-02-15|2011-10-04|Scalado Ab|Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image|
SE1150505A1|2011-05-31|2012-12-01|Mobile Imaging In Sweden Ab|Method and apparatus for taking pictures|
CA2841910A1|2011-07-15|2013-01-24|Mobile Imaging In Sweden Ab|Method of providing an adjusted digital image representation of a view, and an apparatus|
US9473702B2|2011-12-23|2016-10-18|Nokia Technologies Oy|Controlling image capture and/or controlling image processing|
GB201212521D0|2012-07-13|2012-08-29|Wapple Net Ltd|Drawing package|
JP6025470B2|2012-09-13|2016-11-16|キヤノン株式会社|Imaging apparatus, control method, program, and storage medium|
US8913147B2|2012-09-28|2014-12-16|Ebay, Inc.|Systems, methods, and computer program products for digital image capture|
US8913823B2|2012-10-04|2014-12-16|East West Bank|Image processing method for removing moving object and electronic device|
WO2014072567A1|2012-11-06|2014-05-15|Nokia Corporation|Method and apparatus for creating motion effect for image|
JP2014123261A|2012-12-21|2014-07-03|Sony Corp|Information processor and recording medium|
GB2510613A|2013-02-08|2014-08-13|Nokia Corp|User interface for image processing|
US9633460B2|2013-03-15|2017-04-25|Cyberlink Corp.|Systems and methods for seamless patch matching|
JP2014182638A|2013-03-19|2014-09-29|Canon Inc|Display control unit, display control method and computer program|
US10474921B2|2013-06-14|2019-11-12|Qualcomm Incorporated|Tracker assisted image capture|
US9659350B2|2014-01-31|2017-05-23|Morpho, Inc.|Image processing device and image processing method for image correction, and non-transitory computer readable recording medium thereof|
KR102356448B1|2014-05-05|2022-01-27|삼성전자주식회사|Method for composing image and electronic device thereof|
CN105100579B|2014-05-09|2018-12-07|华为技术有限公司|A kind of acquiring and processing method and relevant apparatus of image data|
GB2529182B|2014-08-12|2019-03-27|Supponor Oy|Method and apparatus for dynamic image content manipulation|
US9390487B2|2014-10-20|2016-07-12|Microsoft Technology Licensing, Llc|Scene exposure auto-compensation for differential image comparisons|
US9489727B2|2015-01-30|2016-11-08|Multimedia Image Solution Limited|Method for generating a preferred image by replacing a region of a base image|
CN106603903A|2015-10-15|2017-04-26|中兴通讯股份有限公司|Photo processing method and apparatus|
US9858965B2|2015-10-23|2018-01-02|Microsoft Technology Licensing, Llc|Video loop generation|
CN105867769B|2016-03-29|2017-06-16|广州阿里巴巴文学信息技术有限公司|picture switching method, device and user terminal|
EP3327662B1|2016-07-13|2020-09-23|Rakuten, Inc.|Image processing device, image processing method, and program|
US20190014269A1|2017-07-05|2019-01-10|Motorola Mobility Llc|Device with Lens, Bezel, and Mechanical Upright, and Corresponding Systems and Methods|
US10475222B2|2017-09-05|2019-11-12|Adobe Inc.|Automatic creation of a group shot image from a short video clip using intelligent select and merge|
GB2568278A|2017-11-10|2019-05-15|John Hudson Raymond|Image replacement system|
CN108182746A|2018-01-30|2018-06-19|百度在线网络技术(北京)有限公司|Control system, method and apparatus|
CN110647603A|2018-06-27|2020-01-03|百度在线网络技术(北京)有限公司|Image annotation information processing method, device and system|
US10748316B2|2018-10-12|2020-08-18|Adobe Inc.|Identification and modification of similar objects in vector images|
WO2021035228A2|2020-12-03|2021-02-25|Futurewei Technologies, Inc.|System and methods for photo in-painting of unwanted objects with auxiliary photos on smartphone|
Legal status:
Priority:
Application number | Filing date | Patent title
SE1000142A|SE534551C2|2010-02-15|2010-02-15|Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image|
SE1000142A| SE534551C2|2010-02-15|2010-02-15|Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image|
EP11153998.7A| EP2360644A3|2010-02-15|2011-02-10|Digital image manipulation|
EP16177377.5A| EP3104332A1|2010-02-15|2011-02-10|Digital image manipulation|
EP16177376.7A| EP3104331A1|2010-02-15|2011-02-10|Digital image manipulation|
US13/026,500| US8594460B2|2010-02-15|2011-02-14|Digital image manipulation|
US14/037,708| US9396569B2|2010-02-15|2013-09-26|Digital image manipulation|
US14/037,563| US9196069B2|2010-02-15|2013-09-26|Digital image manipulation|